@style[linewidth 6 inches]
@style[indent 0, spacing 1.5, spread 0.5]
@style[topmargin 1 inch, bottommargin 1 inch, left margin 1.25 inches]
@style[indent 0]
@pageheading []
@style[widowaction force]
@blankspace[8 lines]
@begin[heading]
Proposed Research into Mechanical Theorem Proving

and

the Development of the EKL
Interactive Proof Checking System
@end[heading]
@blankspace[6 lines]
 
@begin[center]
Principal Investigator
John McCarthy


Research Associate
Jussi A. Ketonen
@end[center]
@blankspace[16 lines]

@begin[center]
Stanford University

November 1981
@end[center]
@newpage
@b[1.  INTRODUCTION]

This proposal requests funds for research into mechanical theorem
proving and for the development of EKL, a new interactive proof-checking
system at the Stanford Artificial Intelligence Laboratory.  A manual for
EKL is included as Appendix A.

	The study of representation of facts and modes of reasoning has an
experimental as well as a theoretical aspect.  The long run test of the
usefulness of a means of representation or a mode of reasoning is its
contribution to the success of question-answering and problem-solving
programs.  However, this is a slow way of testing newly developed
concepts, because it often requires the creation of a whole new data base
format.  New concepts can be studied much more quickly if we can test
their consequences directly in relative isolation.

	The main tool in making a computer follow reasoning is an interactive
proof checker. We use this tool to formulate the facts involved in an
intellectual problem and check that our representation is adequate to
solve the problem. 

	In addition to its applications to AI, proof-checking is used to verify
that computer programs meet their specifications.

	The ability to construct a proof checker which accepts proofs at
the same level of detail as a human, and can fill in the gaps if asked,
would be de facto evidence that we are representing reasoning processes
with reasonable accuracy.  In this sense we view the proof checker as a
testing ground for our epistemological ideas in two ways.  First, we can
test whether the formalizations of facts about the world discussed above
can be used to conclude the desired information.  Second, we can check
whether we are getting the structure of the derivations right by comparing
the machine-checked versions to our informal ones.

	The EKL project was started about a year ago and has grown into a
large and robust theorem proving system within that relatively short
span of time.  It currently runs at SAIL (a KL-10 based system at
the Stanford Computer Science Department), occupying about 200K
words of memory.  It is written in MACLISP.
A transportable version (portable to all TOPS-20 systems)
exists and is being used by the students taking CS206, the LISP
programming course at Stanford.

	The main characteristic that distinguishes EKL from its spiritual
predecessors, FOL [Weyhrauch 1977] and EXCHECK [Blaine 1980],
is its flexibility and adaptability to different mathematical
environments. The emphasis has been on creating a system and a language
which would allow the expression and verification of mathematical 
facts in a direct and natural way. The development of
EKL has so far been heavily weighted in favor of expressibility and user 
friendliness as opposed to sophistication in decision procedures.
Our goal is to provide a friendly environment for formal manipulation.
Ultimately we would like to see EKL looking like an editor
with strong capabilities for symbolic computing and verification.

	We view good user engineering as our main challenge: it should
be a pleasure to construct a proof using the interactive
theorem prover of the future.  To this end we have provided many
proof editing primitives in EKL: operations for modifying
previous lines, for goal structuring, and
for copying, transferring, and deleting lines from arbitrary places
in the proof.
	  
	The approach taken has been quite successful: we have been able to
produce an elegant and eminently readable proof of Ramsey's theorem in under
100 lines.  Ramsey's theorem has been used as a benchmark to test the power
of proof checking systems.  The shortest computer-checked proof previously
known is due to Bulnes [Bulnes 1979], who proved it in about 400 lines.

	The above result is important in that it indicates that
mathematical proofs previously thought to be too complex for 
mechanical verification are now within the reach of sufficiently powerful
proof checking systems. 

	We stress the fact that EKL is very much a system under
development; it was started only a year ago.
We have a great deal to learn from other proof-checking systems.
Indeed, we view interactive theorem proving as an experimental science.
Our intention is to go through major proofs from several areas of
mathematics and computer science and see what difficulties arise
from trying to formalize them.
This information, in turn, will be used to design the future EKL.
We also plan to use
some of the challenges posed by [Bledsoe 1977] to test the power of our
algorithms.

	A word about our general philosophy. We feel that the more
we succeed in imitating actual mathematical practice the better 
off we are. There is nothing new in any of the ideas presented here.

The role of logic is de-emphasized.  However, one can practice 
formal logic on this system just as one would practice any
other field of mathematics. Our stress is on the ease of 
formalization.  We see mathematics as a discipline that thrives on highly
abbreviated symbolic manipulations. 
We view the use of logic in such contexts as a relatively straightforward matter.
Handling this part requires a procedure to check the correctness of simple
inferences.  For this purpose we have written a program that tests the
validity of deductions in a fragment of predicate calculus
designed to capture the notion of "trivial" inferences (discussed in more detail
in section 3 of this proposal).
We feel that this procedure is adequate for the purposes for which it was
intended.

	Secondly, we need to be able to talk about formal functional application
and to create lambda-abstractions of terms.  In this way we can fold into the
language of discourse concepts previously thought metamathematical in nature.

	Finally we need to formulate our axioms so as to facilitate
efficient symbol manipulation.
Here again the simple imitation of actual mathematical practice proves
to be a wise choice.
For example, to define a function as a set of ordered pairs is certainly
correct but atrocious from the point of view of efficiency and clarity of
notation.  Similarly, we could state the pairing axiom as

@begin[display,use I]
@Z[8]x y.@Z[9]z.@Z[8]u.(u@G[e]z  @#   (u=x)v(u=y))
@end[display]

which, though logically correct, clearly loses to

@begin[display,use I]
@Z[8]x y u.(u@G[e]{x,y}  @#   (u=x)v(u=y))
@end[display]

in expressive power.  This again emphasizes the highly abbreviated nature
of mathematical reasoning and its efficient use of functional application.

@b[2.  THE EKL SYSTEM]
 
	EKL is an interactive proof checker and constructor.
EKL is based on simple, yet powerful, proof 
construction primitives that allow several "proof modules" to 
coexist simultaneously in core and permit arbitrary linkages between
such modules.  It is written in MACLISP.  In fact, the top level of EKL is
in Lisp.  The "commands" given to EKL are simply s-expressions
that are to be evaluated.
The user is given a set of routines to manipulate and access proofs;
proofs can be accessed only through their names, using these EKL routines.
Thus EKL is automatically programmable: the
programming language is Lisp itself.  Employing the EKL primitives, any user
can write Lisp programs to interact with EKL.  This is a way to implement
proof strategies of arbitrary complexity in EKL and to "customize"
EKL for particular user requirements.


	The language of EKL can be extended all the way to finite order
predicate logic with typed @g(l)-calculus.
Each EKL atom has a type (a metatheoretic entity), a sort,
and a syntype, which is either "variable", "constant", "bindop" or "functional".
The type of an EKL object tells how that object can be applied.
For example, an object of type ground @# ground can be applied to
objects of type ground, resulting in an object of type ground.
A term of type truthval is called a formula.  All deduction rules
manipulate terms of type truthval.  EKL checks all terms for correct typing.
We also allow polymorphism in a style somewhat similar to Edinburgh LCF
[Gordon, Milner, and Wadsworth 1977].
Terms can have variable types, and under certain circumstances the same atom
may be used to denote several functions.
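
The application rule just described can be sketched as follows.  The sketch
is in modern Python purely for exposition; EKL itself is written in MACLISP,
and the function and type encoding below are our own invention, not part of EKL:

```python
# Toy illustration of EKL-style type application (not EKL code):
# an object of type ground -> ground applied to an object of type
# ground yields an object of type ground; a mismatch is a typing error.

def apply_type(fn_type, arg_type):
    """fn_type is an atomic type name or a triple ('->', dom, cod)."""
    if not (isinstance(fn_type, tuple) and fn_type[0] == '->'):
        raise TypeError("only arrow types can be applied")
    _, dom, cod = fn_type
    if dom != arg_type:
        raise TypeError("argument type %s does not match domain %s"
                        % (arg_type, dom))
    return cod

# A predicate on individuals has type ground -> truthval; applying it
# to a ground term yields a term of type truthval, i.e. a formula.
```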

	EKL is specifically designed to manipulate proofs and to handle
several proofs at the same time. A proof in EKL consists of lines.
Each line in a proof is a result of a command.
A state in EKL consists of the currently active proof,
the currently active context, and the currently active namecontext.
A context is simply a list of declarations for atoms; the declarations give
the syntactic meaning of these atoms.  Declarations are generated by means of
the DECL function, or automatically by EKL when no declaration is given by
the user.  Each line has associated with it its context and dependencies.
If a line contains a formula, then its context is the set of all declarations
needed to make sense of that formula.
The currently active context is the cumulative result of all
the context manipulation that has happened in the currently active proof.

	In a typical command several lines may be cited.
First, the contexts of the cited lines are combined.
If an incompatibility turns up, the command is aborted.
This combined context is then merged with the previously active context;
all the incompatible declarations from the previous context are thrown out.
The resulting context is then used for parsing terms
and computing types in the command.
Thus if no context lines are cited, we default to the previous context.
This is sufficient most of the time.
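
The context-combination rule can be sketched as follows.  This is a
simplification in modern Python for exposition only; modeling contexts as
maps from atom names to declarations is our own device, not EKL's
representation:

```python
# Sketch (not EKL code) of the context-combination rule: combine the
# contexts of the cited lines, aborting on incompatibility, then merge
# with the previous active context, throwing out incompatible
# declarations from the previous context.

def combine_cited(contexts):
    """Combine the contexts of the cited lines; abort on conflict."""
    combined = {}
    for ctx in contexts:
        for atom, decl in ctx.items():
            if atom in combined and combined[atom] != decl:
                raise ValueError("incompatible declarations for " + atom)
            combined[atom] = decl
    return combined

def active_context(previous, cited_contexts):
    """Merge the combined cited context with the previous active one;
    conflicting declarations from the previous context are discarded."""
    result = dict(previous)
    result.update(combine_cited(cited_contexts))
    return result
```

With no cited contexts, `active_context` simply returns the previous
context, matching the default behavior described above.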

	It follows that we can use conflicting declarations in different
parts of the same proof, provided that we do not try to refer to
these lines within the same command.  The language that is used is
ultimately local to the line in question.
This locality allows us to build libraries of already proved
theorems. We can change notation and basic definitions as easily as a
mathematician can in switching from one book to another.

	Almost all the primitive commands of EKL run through the
EKL rewriting routines: for example, all the EKL decision procedures
are part of the rewriting process.  Rewriting is controlled by
a language of regular expressions suggested by McCarthy.
We expect to develop this into
an extremely powerful facility for proving theorems in EKL.

	Typical commands in EKL range from more primitive operations
like ASSUME or @Z(8)E to complex instructions like TRW, DEVAL or LINK.
In a special class are the very popular editing operations
CHANGE, TRANSFER, DELETEL and COPYL. 
These are documented in more detail in Appendix A.
 
 
@b[3.  ALGORITHMS FOR PROOF CHECKING]

	We know, realistically speaking, what we
may expect from a good general purpose theorem prover.  We also
know the many undecidability results that limit
the "creative potential" of such programs.

	This should not detract from the fact that there are useful
things to be done in the field of algorithm development.  One has
reason to believe that somewhat
more sophisticated insights into the process of human theorem proving
may provide surprisingly good decision procedures in practice.

	Predicate logic has traditionally been viewed as the
place to formalize the mathematician's notion of a "tautologous inference."
Intuitively, it is clear that any "trivial" inference should be easily
formalizable and decidable in predicate logic.  While we would like to
take issue with the question of the ease of formalization,
we shall concentrate now on the question of decidability.  As is well known,
the full system of predicate calculus is not decidable; thus we are led
to study suitable fragments of predicate logic.
For the most trivial case, namely propositional logic, many sufficiently
fast algorithms are known; we have learned to live with exponential time bounds.
For example, the FOL system at the Stanford Artificial Intelligence Laboratory
has an extremely efficient procedure, due to A. Chandra and R. Weyhrauch,
for checking propositional tautologies by compiling an
optimized machine language program and then executing it.
The situation gets more complicated with fragments that employ
quantifiers in a non-trivial way.
The best known of these are monadic predicate calculus and various
subclasses of formulae determined by quantifier prefixes.
All of the classes mentioned so far have obvious defects
from the point of view of mechanical proof checking.  Either they place
artificial limitations on the type of formulae or else the associated
decision procedures cannot be efficiently implemented on a computer.
For example, [Lewis 1978] has shown that any decision procedure for
monadic predicate calculus will (given the present state
of opinion about the P=NP problem) run in essentially double exponential time.
In practice this may cause insurmountable problems.
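
For illustration, the naive truth-table method for propositional tautologies
can be written in a few lines; the sketch below is in modern Python and is
not the Chandra-Weyhrauch procedure (which compiles an optimized program
instead), though it shares the exponential worst case:

```python
from itertools import product

# Brute-force truth-table check for propositional tautologies:
# try all 2^n assignments to the n atoms.

def implies(a, b):
    return (not a) or b

def is_tautology(formula, atoms):
    """formula maps an assignment (a dict from atom names to
    booleans) to a boolean; check it under every assignment."""
    return all(formula(dict(zip(atoms, values)))
               for values in product([False, True], repeat=len(atoms)))

# Peirce's law ((p > q) > p) > p is a classical tautology:
peirce = lambda v: implies(implies(implies(v["p"], v["q"]), v["p"]), v["p"])
```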

	We have been working on a fragment
of predicate calculus that in our opinion does better than the ones
mentioned above. We do not restrict the syntactic character of formulas
that can be used. Instead we modify the rules of deduction within
Gentzen sequent calculus so as to attain a logic roughly analogous to context
free grammars.
The rules of this system are set up so that the complexity of the 
sequents in question can only increase in the course of deduction.
Intuitively, it attempts to characterize
all the formulas that can be deduced without using any "tricks".
In general, any line of argument that proceeds in a direct manner
from the hypotheses to conclusions can be formalized within this system.
Thus, for example, the formula 
@begin[display]
		      @Z[9]y.@Z[8]x.(A(y)@#A(x))
@end[display]

is not derivable in this 
system since the choice of @i[y] would indirectly depend on the @i[x] inside.
Indeed, most people seem to need some time before accepting the validity
of this statement. On the other hand, it can be shown that the equally 
suspect  formula
@begin[display]
             @Z[9]y.@Z[8]x.(A(y)@#A(x))v@Z(9)y.@Z[8]x.(A(y)@#A(x))
@end[display]

is derivable in our system. Clearly, we can still do better in 
characterizing the notion of "trivial inference."  
This is one of
our future areas of research.
The corresponding decision procedure has been implemented in its
most primitive form in EKL.
One may think of this procedure as the
dual to the one described in [Davis 1981].
It will be described in more detail in [Ketonen and Weyhrauch 1981].
It works in exponential time and seems quite sufficient for
most purposes. However, it has a hard time with equalities 
and complex lambda-terms. Improvement of this algorithm is one
of the items on our agenda. One of the improvements that we have in
mind is partial higher order unification algorithms.  Though it is
well known that unification is undecidable for higher order 
predicate calculus, one may again think of cases of special 
interest to us.
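
The first formula displayed above, though not derivable in the restricted
system, is in fact classically valid (it is the well-known "drinker"
formula).  A brute-force check over small finite domains, sketched below in
modern Python purely for illustration, confirms its validity; its
non-derivability concerns only the restricted fragment:

```python
from itertools import product

def drinker_holds(A):
    """Check the formula  Ey.Ax.(A(y) > A(x))  on a finite
    interpretation: A is a tuple of booleans, A[i] giving the
    truth value of A(i) on the domain range(len(A))."""
    n = len(A)
    return any(all((not A[y]) or A[x] for x in range(n))
               for y in range(n))

# The formula holds on every interpretation of A over every
# nonempty domain of size 1..4:
all_valid = all(drinker_holds(A)
                for n in range(1, 5)
                for A in product([False, True], repeat=n))
```

The check exercises validity only over finite domains, but the formula is
valid over all nonempty domains: either A holds of everything, or some y
falsifies A(y) and makes the implication vacuous.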

	The EKL procedures presently contain no heuristic reasoning.
We view this as a weakness resulting from our present inability
to give a clear and concise logical analysis of the most common
heuristics that are used in mathematical reasoning.
We greatly admire the set of heuristics developed 
by [Boyer and Moore 1979A] for verifying the correctness of Lisp
programs. We would like to see how this approach 
could be transported to the highly interactive EKL environment and adapted to
the additional complications of quantification and higher types.
 
@b[4.  THE USE AND MANIPULATION OF PROOFS]
	
	One of the problems that we face in the field of mechanical
theorem proving is the fact that its use seems severely limited given
the current state of technology.  Our colleagues in the mathematics department
are not likely to be impressed by yet another proof of Ramsey's theorem.

	However, as [Kreisel 1981B] says: "Contrary to most elementary texts 
on logic, the heart of logical studies of proofs does not concern the
building up of formal derivations (by means of formal rules), but with 
their transformations".  He goes on to point out that most of those
transformations are purely mechanical in nature and quite tedious
to execute by pencil and paper. Thus we are in a position where, with a
modest investment of effort (and judicious choice of problems), we can
attain results that would seem impressive and in fact impossible 
to attain without a computer. [Goad 1980] has clearly demonstrated this.

	The advantage of cleanly presented "modular" proofs is their
easy availability for computational purposes.  Thus one can use the 
techniques of proof normalization to extract computational
information from the proofs in question. We intend to apply these
techniques to study variants of Ramsey's theorem and to the question 
of computing feasible bounds for the numbers occurring in Van der Waerden's 
theorem on arithmetic progressions. Once developed, these techniques can be
used to study program and proof transformations in general.
In fact, our very distant goal is to view the entire EKL top level
as a set of proof transformation routines.


@b[5.  METATHEORY]
 	
	The past few years have marked a new interest in metatheory
and metatheoretic reasoning, i.e., algorithmic knowledge about 
formal languages. The more notable efforts in this regard can be
found in [Boyer and Moore 1979B], [Davis and Schwartz 1977] and [Weyhrauch 1978].
Like Boyer and Moore we are less ambitious than Weyhrauch.  We have no intention
of discussing the entire range of formal reasoning within EKL. Our
goal is to come up with a relatively simple and straightforward
way for the user to express, use, and verify metatheoretic knowledge in EKL.

	Some of the knowledge commonly regarded as metatheoretic
can be naturally expressed in current EKL because of the existence of
higher order types. Thus facts about simple schemata are formulated
in terms of second order quantifiers.  In fact, during our more
heretical moments we are inclined to believe that all we need is
a higher order predicate calculus together with
a schematic language for formulae.  This suspicion is bolstered by
the nearly uniform denial by practicing mathematicians that
they ever use what logicians call metatheory.  At present
all the metatheories that we have seen are too crude and simplistic
to capture even remotely the processes occurring in mathematics.
In any case, we see the necessity for building some kind of
a metafacility for EKL, probably in conjunction with some 
program verification primitives.

	We shall test the possible extensions of EKL
against theorems in mathematics which seem to use metatheoretic reasoning.
One example we would like to mention in this connection is Wilson's theorem:

@begin(display)
"If @i[p] is a prime number it divides @i[(p-1)!+1]." 
@end(display)

Thus, @i[5] divides @i[4!+1 = 25], and @i[7] divides @i[6!+1 = 721].
The standard proof is not at all obvious and requires
arranging the numbers strictly between @i[1] and @i[p-1] in pairs such that
the product of each pair has remainder @i[1] when divided by @i[p].  Tying
down the description
of this pairing requires the ability to handle sets or higher types neatly.
This may involve adding some schematic machinery to EKL.
The other well known way of proving Wilson's theorem involves factoring
polynomials over finite fields. This brings up, for example, the question of
the formalization of the natural proof and formulation of the unique
factorization theorem. More generally, we would like to be able to 
discuss and use algebraic simplification the way it is done in any
standard text book of algebra. [Aiello and Weyhrauch 1981] have done this 
in FOL using rather cumbersome metatheoretic machinery. Our belief is 
that we can go further by the simple use of higher types and lambda
abstraction.
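
Wilson's theorem and the pairing argument above are easy to check
numerically.  The following sketch (modern Python, for illustration only)
tests divisibility directly and pairs each residue strictly between 1 and
p-1 with its multiplicative inverse mod p:

```python
from math import factorial

def wilson_holds(p):
    """Does p divide (p-1)! + 1 ?  True for every prime p."""
    return (factorial(p - 1) + 1) % p == 0

def inverse_pairing(p):
    """For prime p, pair each a in 2..p-2 with the unique b such that
    a*b has remainder 1 mod p (pow(a, -1, p) needs Python 3.8+)."""
    return {a: pow(a, -1, p) for a in range(2, p - 1)}
```

For p = 7 the pairing is {2: 4, 3: 5, 4: 2, 5: 3}; the product of 2..5 is
120, which leaves remainder 1 on division by 7, so (7-1)! has the same
remainder as 1 * 6, i.e. -1 mod 7, as the standard proof requires.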
 
 
@b[6.  GOALS]
 
	(1) Major projects:  We have students who are willing to take
on major projects with EKL.  EKL will be used during the Fall 1981
CS206 (beginning Lisp) and Winter CS256 (Mathematical Theory of
Computation) courses.  We want to use EKL to verify the correctness
of major programs and to test EKL against some of the challenges in
[Bledsoe 1977].
 
	(2) Program verification:
A computer program is a mathematical object, and it has always seemed
disgraceful that programs are debugged, i.e. tried out on cases until
the programmer is satisfied or at least tired.  A program should be
proved to meet its specifications.
This would provide much greater assurance that
a program is correct than mere debugging.

Since the early 1960s it has been a major objective of the branch
of computer science called mathematical theory of computation to
provide formal systems within which proofs of program correctness
are easy to develop and no longer than the informal proofs.  This
goal has not been fully achieved for all classes of programs, but
enough progress has been made so that techniques of program proving
have been successfully introduced into programming courses as well
as in specialized courses in mathematical theory of computation.

	EKL has been and will be used for stating and proving facts
about programs. Currently EKL offers no primitives for making
this task easier and more pleasant. We are planning to remedy
this situation in the near future.

	As an application, we intend to prove the correctness
of the Lisp compiler LCOM4 mentioned in [McCarthy and Talcott 1980].

	(3) To continue the study of algorithms for typed predicate
calculus and of partial higher order unification algorithms.

	(4) To investigate and classify paradigms of informal
mathematical reasoning.

	(5) To develop a robust and usable metatheoretic facility
for EKL.
 
	(6) To develop the user interface of EKL further.  We have been
thinking of associating stronger display features with EKL.  The recent
E/MACLISP interface developed by M. Frost and R. Gabriel
has made it possible to run EKL through the SAIL
E editor.

	(7) To develop the proof theoretic facilities of EKL with
an eye toward normalization and the extraction of bounds.

@b[7.  REFERENCES]

[Aiello and Weyhrauch 1981] Aiello, L. and Weyhrauch, R.,
@b(Using Metatheory to do Algebra), To appear.

[Blaine 1980] Blaine, L., @b(Programs for structure proofs), Ph.D. 
  Dissertation, Computer Science Department, Stanford University, 1980.

[Bledsoe 1977]  Bledsoe, W., @b(Non-resolution theorem proving), 
  @i(Artificial Intelligence, 3), 1-36, 1977.
 
[Boyer and Moore 1979A] Boyer, R.S. and Moore, J. S., @b(A computational logic),
  Academic Press, New York, 1979.

[Boyer and Moore 1979B] Boyer, R. S. and Moore, J. S., @b(Metafunctions:
  Proving them correct and using them efficiently as new proof procedures),
  SRI International, Technical Report CSL-108, 1979.

[Bulnes 1979] Bulnes, J., @b(Goal: a goal oriented command language for 
  interactive proof construction), Stanford AI Memo AIM-328, 1979.

[Davis 1981] Davis, M., @b(Obvious logical inferences), @i(Proceedings IJCAI,
  Vancouver), 530-531, 1981.
 
[Davis and Schwartz 1977] Davis, M. and Schwartz, J., @b(Metamathematical
  extensibility for theorem verifiers and proof-checkers), Courant Computer
  Science Report No. 12, Courant Institute of Mathematical Sciences,
  New York, 1977.

[Goad 1980] Goad, C., @b(Proofs as descriptions of computations), 
  @i(Proceedings of Automatic Deduction Conference) Les Arcs (France),
  July 8-11, 1980.

[Gordon, Milner and Wadsworth 1977] Gordon, M., Milner, R., and Wadsworth, C.,
  @b(Edinburgh LCF), Department of Computer Science Internal Report CSR-11-77,
  University of Edinburgh, 1977.
 
[Ketonen 1981A] Ketonen, J., @b(Efficient theorem proving in set theory),
  (to appear in @i(University-Level Computer-Assisted Instruction at Stanford:
  1968-1980), edited by Patrick Suppes, IMSSS, Stanford, 1981.)

[Ketonen 1981B] Ketonen, J., @b(On a decidable fragment of predicate calculus),
  (to appear in @i(University-Level Computer-Assisted Instruction at
  Stanford: 1968-1980), edited by Patrick Suppes, IMSSS, Stanford, 1981.)

[Ketonen and Weening 1981] Ketonen, J., and Weening, J., @b(EKL - An interactive
  proof checker), Users' Reference Manual, 40 pp., Stanford University, 1981.

[Ketonen and Weyhrauch 1981] Ketonen, J., and Weyhrauch, R., @b(A semidecision
  procedure for predicate calculus), (In preparation), 1981.

[Kreisel 1981A] Kreisel, G., @b(Neglected possibilities of processing
  assertions and proofs mechanically:  choice of problems and data),
  (to appear in @i(University-Level Computer-Assisted Instruction at Stanford:
  1968-1980), edited by Patrick Suppes, IMSSS, Stanford, 1981.)

[Kreisel 1981B] Kreisel, G., @b(Extraction of bounds:  some tricks of the
  trade), (to appear in @i(University-Level Computer-Assisted Instruction
  at Stanford: 1968-1980), edited by Patrick Suppes, IMSSS, Stanford, 1981.)


[Lewis 1978] Lewis, H., @b(Complexity of Solvable Cases of the Decision
  Problem for the Predicate Calculus),
  @i(Proceedings of the IEEE Symposium on Foundations
  of Computer Science), 35-47, 1978.

[McCarthy 1962] McCarthy, J., @b(Computer programs for checking 
  mathematical proofs), @i(Proc. Symp. Pure Math.), Vol. 5, American
  Mathematical Society, pp. 219-227, 1962.

[McCarthy 1967] McCarthy, J., @b(Correctness of a Compiler for Arithmetic
  expressions), @i(Proc. Symp. Applied Math.), Vol. 19, American Mathematical
  Society, pp. 33-41, 1967.

[McCarthy and Talcott 1980] McCarthy, J., and Talcott, C., @b(LISP: programming
  and proving), (to appear, available as Stanford CS206 Course Notes, Fall 1980.)

[Weyhrauch 1977] Weyhrauch, R., @b(A users manual for FOL), Stanford AI Memo
  AIM-235.1, 1977.

[Weyhrauch 1978] Weyhrauch, R., @b(Prolegomena to a theory of mechanized
  formal reasoning), Stanford AI Memo AIM-315, 1978.
@newpage
 
@Center[Biography of Jussi Antero Ketonen]
@blankspace(3 lines)

Date of Birth:   April 3, 1952@*
Marital Status:  Married, one child@*
Citizenship:     U.S.

BA, MA, Mathematics, Abo Akademi (Finland), 1968@*
PhD, Mathematics, University of Wisconsin, 1971

@begin[description]
@Tabclear
@TabDivide[4]

1979-@\Stanford University@*
Research Associate@*
Stanford Artificial Intelligence Laboratory@*
Institute for Mathematical Studies in the Social Sciences

1977-79@\Stanford University@*
Visiting Associate Professor, Mathematics

1975-80@\University of Hawaii@*
Associate Professor, Mathematics

1972-75@\University of California, Berkeley@*
Miller Fellow, 1972-74@*
Research Associate, 1974-75

1971-72@\State University of New York, Buffalo@*
George William Hill Research Instructorship

1968-71@\University of Wisconsin, Madison@*
University of Wisconsin Fellowship
Fulbright travel grant
@end[description]

@begin[center]
Publications
@end[center]

Ketonen, Jussi (1971):  "Everything you wanted to know about ultrafilters,"
Doctoral dissertation, University of Wisconsin, 1971.

Ketonen, Jussi (1972):  "On non-regular ultrafilters,"
@i[Journal of Symbolic Logic, 3], 71-74, 1972.

Ketonen, Jussi (1972):  "Strong compactness and other cardinal sins,"
@i[Annals of Mathematical Logic, 5], 47-76, 1972.

Ketonen, Jussi (1973):  "Ultrafilters over measurable cardinals,"
@i[Fundamenta Mathematicae, 77], 257-269, 1973.

Ketonen, Jussi (1974):  "Some combinatorial principles,"
@i[Transactions of the American Mathematical Society, 188], 387-394, 1974.

Ketonen, Jussi (1974):  "Banach spaces and large cardinals,"
@i[Fundamenta Mathematicae, 81], 291-303, 1974.

Ketonen, Jussi (with M. Benda) (1974):  "On regularity of ultrafilters,"
@i[Israel Journal of Mathematics, 17], 231-240, 1974.

Ketonen, Jussi (1976):  "On the existence of P-points,"
@i[Fundamenta Mathematicae, 92], 91-94, 1976.

Ketonen, Jussi (1976):  "Non-regular ultrafilters and large cardinals,"
@i[Transactions of the American Mathematical Society, 224], 61-73, 1976.

Ketonen, Jussi (1978):  "The structure of countable Boolean algebras,"
@i[Annals of Mathematics, 108], 41-89, 1978.

Ketonen, Jussi (1979):  "Open problems in the theory of ultrafilters,"
@i[Proceedings of the Fourth Scandinavian Conference in Jyvaskyla],
edited by K.J.J. Hintikka, I. Niiniluoto, and Saarinen,
D. Reidel, 227-247, 1979.

Ketonen, Jussi (1980):  "Some combinatorial properties of ultrafilters,"
@i[Fundamenta Mathematicae, 107], 225-235, 1980.

Ketonen, Jussi (with R. Solovay) (1981):  "Rapidly growing Ramsey functions,"
@i[Annals of Mathematics, 113], 267-314, 1981.

Ketonen, Jussi (1981):  "On finitely axiomatizable theories."  (In preparation. 
This will be a revised version of a paper that already has been accepted
by the @i[Proceedings of the American Mathematical Society].) 1981.

Ketonen, Jussi (1980):  "Isomorphism types of countable Boolean algebras,"
(Manuscript submitted for publication.) 1980.

Ketonen, Jussi (1981):  "On a decidable fragment of predicate calculus."              
To appear in @i[University-Level Computer-Assisted Instruction at
     Stanford: 1968-1980], Edited by Patrick Suppes, IMSSS, Stanford, 1981.

Ketonen, Jussi (with Joe Weening) (1981):  "EKL - an interactive proof checker,"
Users' Reference Manual, 40 pp., 1981.

Ketonen, Jussi (with Richard Weyhrauch) (1982):  "A semidecision procedure
for predicate calculus,"  (In preparation.) 1982.